Search Results: "edward"

8 October 2013

Daniel Kahn Gillmor: Unaccountable surveillance is wrong

As I mentioned earlier, the information in the documents released by Edward Snowden shows a clear pattern of corporate and government abuse of the information networks that are now deeply intertwined with the lives of many people all over the world. Surveillance is a power dynamic: the party doing the spying has power over the party being surveilled. The surveillance state that results when one party has "Global Cryptologic Dominance" is a seriously bad outcome. The old saw goes "power corrupts, and absolute power corrupts absolutely". In this case, the stated goal of my government appears to be absolute power in this domain, with no constraint on the inevitable corruption. If you are a supporter of any sort of just social contract (e.g. the International Principles on the Application of Human Rights to Communications Surveillance), the situation should be deeply disturbing.

One of the major sub-threads in this discussion is how the NSA and their allies have actively tampered with and weakened the cryptographic infrastructure that everyone relies on for authenticated and confidential communications on the 'net. This kind of malicious work puts everyone's communication at risk, not only that of the people the NSA counts among their "targets" (and the NSA's "target" selection methods are themselves fraught with serious problems).

The US government is supposed to take pride in the checks and balances that keep absolute power out of any one particular branch. One of the latest attempts to simulate "checks and balances" was the President's creation of a "Review Group" to oversee the current malefactors. The review group then asked for public comment. A group of technologists (including myself) submitted a comment demanding that the review group provide concrete technical details to independent technologists.
Without knowing the specifics of how the various surveillance mechanisms operate, the public in general can't make informed assessments about what they should consider to be personally safe. Lack of detailed technical knowledge also makes it much harder to mount an effective political or legal opposition to the global surveillance state (e.g. consider the terrible Clapper v. Amnesty International decision, where plaintiffs were denied standing to sue the Director of National Intelligence because they could not demonstrate that they were being surveilled).

It's also worth noting that the advocates for global surveillance do not themselves want to be surveilled, and that (for example) the NSA has tried to obscure as much of their operations as possible by over-classifying documents and making spurious claims of "national security". This is where the surveillance power dynamic is most baldly in play, and many parts of the US government intelligence and military apparatus have a long history of acting in bad faith to obscure their activities. The people who have been operating these surveillance systems should be ashamed of their work, and those who have been overseeing the operation of these systems should be ashamed of themselves.

We need to better understand the scope of the damage done to our global infrastructure so we can repair it if we have any hope of avoiding a complete surveillance state in the future. Getting the technical details of these compromises into the hands of the public is one step on the path toward a healthier society.

Postscript

Lest I be accused of optimism, let me make clear that fixing the technical harms is necessary but not sufficient; even if our technical infrastructure had not been deliberately damaged, or if we manage to repair it and stop people from damaging it again, far too many people still regularly accept ubiquitous private (corporate) surveillance.
Private surveillance organizations (like Facebook and Google) are too often in a position where their business interests are at odds with their users' interests, and powerful adversaries can use a surveillance organization as a lever against weaker parties. But helping people to improve their own data sovereignty and to avoid subjecting their friends and allies to private surveillance is a discussion for a separate post, I think.

Tags: cryptography, nsa

6 September 2013

Daniel Pocock: Latest NSA revelations no surprise, but...

The latest NSA revelations confirm the systematic manner in which the NSA exploits known and potentially unknown backdoors to work around cryptography. This is not surprising; I speculated briefly about the OpenBSD and Windows NSA key incidents when Snowden originally came into view. Even two days ago, a detailed piece in Wired Magazine predicted many of the things that are in today's headlines, although Schneier indicates in The Guardian that he had seen some of the material before publication.

What we really need to know

If the NSA and CIA have had these capabilities, just how much did they really know before and during the financial crisis that kicked off in 2007? Could the crisis have been averted? It was already known that the Greek parliament was wired; was the NSA involved in similar shenanigans? This could well be part of the Snowden revelations that we are yet to see.

In fact, it is this last issue that could be the most damaging for "US interests". If countries (or their citizens) receive such undeniable evidence of a CIA conspiracy exposed by Snowden in broad daylight and they refuse to pay debts, then it could lead to a major re-alignment of international politics, alliances and treaties. Syria would be just the tip of the iceberg.

15 August 2013

Ingo Juergensmann: Exim4 and TLS with GMX/Web.de

Due to the unveiling of NSA surveillance by Edward Snowden, some German mail providers, for example GMX and Web.de, decided last week to use TLS when sending out mail. Usually there shouldn't be a problem with that, but it seems as if the Debian package of Exim4 (exim4-daemon-heavy) doesn't support any TLS ciphers that those providers will accept. The Debian package uses GnuTLS for TLS, and there is Bug #446036 asking for compilation against OpenSSL instead. Anyway, maybe it's something in my config, as I don't use the Debian config but my own /etc/exim4/exim4.conf. Here are the TLS-related parts:
tls_advertise_hosts = *
tls_certificate = /etc/exim4/ssl.crt/webmail-ssl.crt
tls_privatekey = /etc/exim4/ssl.key/webmail-server.key
That's my basic setup. After discovering that GMX and Web.de cannot send mails anymore, I added some more, following the Exim docs (it's commented out, because I don't use GnuTLS anymore):
#tls_dhparam = /etc/exim4/gnutls-params-2236
#tls_require_ciphers = ${if =={$received_port}{25} \
#                          {NORMAL:%COMPAT} \
#                          {SECURE128}}
But still I got this kind of errors:
2013-08-14 22:49:27 TLS error on connection from mout.gmx.net [212.227.17.21] (gnutls_handshake): Could not negotiate a supported cipher suite.
As this didn't help either, I recompiled exim4-daemon-heavy against OpenSSL and, et voilà, it worked again. So the question is whether there's any way to get it working with GnuTLS. Does the default Debian config work, and if so, why? And if not, can a decision be made to use OpenSSL instead of GnuTLS? Reading the bug report, it seems as if there are exceptions for linking against OpenSSL, so the GPL wouldn't be violated.

UPDATE 16.08.2013:
I reinstalled the GnuTLS version of exim4-daemon-heavy to test the recommendation in the comments with explicit tls_require_ciphers settings, but with no luck:
#tls_require_ciphers = SECURE256
#tls_require_ciphers = SECURE128
#tls_require_ciphers = NORMAL
These all resulted in the usual "(gnutls_handshake): Could not negotiate a supported cipher suite." error when trying the cipher settings one by one.

UPDATE 2 16.08.2013:
There was a different report about a recent GnuTLS problem on the debian-user-german mailing list. It's not the same cause, but it might be related.
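The underlying failure is a cipher-list intersection problem: the handshake succeeds only if the two peers share at least one suite. As a quick illustration of the mechanism (using Python's ssl module on top of OpenSSL rather than GnuTLS; the cipher string is just an example, not a recommendation):

```python
import ssl

# Build a client context and inspect the cipher suites it will offer.
ctx = ssl.create_default_context()
print(len(ctx.get_ciphers()), "suites offered by default")

# Narrowing the list (the role tls_require_ciphers plays in Exim) shrinks
# the intersection with the remote peer's list; if the intersection becomes
# empty, the handshake fails just like the gnutls_handshake error above.
ctx.set_ciphers("ECDHE+AESGCM")
for c in ctx.get_ciphers():
    print(c["name"], c["protocol"])
```

Probing the remote end, for example with openssl s_client -starttls smtp, shows the other half of the intersection.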

6 August 2013

Daniel Pocock: DebConf next week, free real-time communications session

DebConf13 is just about to get under way. While last year's event involved an exciting visit to Nicaragua, this year it is right here in Switzerland, making it very convenient for the populous community of free software enthusiasts across western Europe.
A long queue has already formed on the freeway going north from Italy into Switzerland's Gotthard Tunnel: did somebody put Debian's 20th birthday party on a social networking site?

The big invitation

On the final day of DebConf, Saturday 17 August, there is an afternoon track dedicated to the topic of free real-time communications. While DebConf is traditionally aimed at the Debian Developer community, real-time communications, by definition, requires cross-platform collaboration in order to succeed, and it is a particularly hot topic thanks to recent revelations about communications privacy. Therefore, there is a wider invitation for visitors to attend this session; please just remember to register your details here, it is free (and private). To get a feel for the topic, please see the video of the Free, Open and Secure Communications panel from FOSDEM 2013.

Seeing Switzerland
Who said Switzerland is expensive? Swiss Postbus is offering free wifi now. Please see some of my previous blogs about Swiss travel for videos and pictures if you are looking for ideas for your time in .ch

16 July 2013

Gunnar Wolf: Open Repositories 2013 #OR2013 Charlottetown, P.E.I., Canada

I just came back home from the Open Repositories 2013 conference in Charlottetown, Prince Edward Island, Canada; a conference on Open Access publishing, digital repositories, preservation strategies... It was quite an interesting conference, and it gave me the opportunity to meet several interesting people. Most worthy of note, I spent some good time with the team behind the EPrints software, which powers my institute's repository, and with whom I expect to do some work trying to get EPrints more in shape to be considered for upload to Debian. I presented a (very non-technical) talk titled RAD-UNAM: Genesis and evolution of a repository administrators group, describing the experiences we have had in our group at UNAM setting up a federated repository (link to the talk on the OR2013 site). It was a very good experience as well as a nice trip. Oh, and if you come over to my blog, you will find the photos I took during the week in this very nice, little Canadian city.

12 July 2013

Daniel Pocock: Practical VPNs with strongSwan, Shorewall, Linux firewalls and OpenWRT routers

There is intense interest in communications privacy at the moment thanks to the Snowden scandal. Open source software has offered credible solutions for privacy and encryption for many years. Sadly, making these solutions work together is not always plug-and-play. In fact, secure networking, like VoIP, has been plagued by problems with interoperability and firewall/filtering issues, although the solutions are now starting to become apparent. Here I will look at some of them, such as the use of firewall zones to simplify management and the use of ECDSA certificates to avoid UDP fragmentation problems. I've drawn together a lot of essential tips from different documents and mailing list discussions to demonstrate how to solve a real-world networking problem.

A typical network scenario and requirements

Here is a diagram of the network that will be used to help us examine the capabilities of these open source solutions. Some comments about the diagram:
  • The names in square brackets are the zones for Shorewall, they are explained later.
  • The dotted lines are IPsec tunnels over the untrusted Internet.
  • The road-warrior users (mobiles and laptops) get virtual IP addresses from the VPN gateway. The branch offices/home routers do not use virtual IPs.
  • The mobile phones are largely untrusted: easily lost or stolen, many apps have malware, they can only tunnel to the central VPN and only access a very limited range of services.
  • The laptops are managed devices so they are trusted with access to more services. For efficiency they can connect directly to branch office/home VPNs as well as the central server.
  • Smart-phone user browsing habits are systematically monitored by mobile companies with insidious links into mass-media and advertising. Road-warriors sometimes plug-in at client sites or hotels where IT staff may monitor their browsing. Therefore, all these users want to tunnel all their browsing through the VPN.
  • The central VPN gateway/firewall is running strongSwan VPN and Shorewall firewall on Linux. It could be Debian, Fedora or Ubuntu. Other open source platforms such as OpenBSD are also very well respected for building firewall and VPN solutions, but Shorewall, which is one of the key ingredients in this recipe, only works on Linux at present.
  • The branch office/home network could be another strongSwan/Shorewall server, or it could also be an OpenWRT router.
  • The default configuration for most wifi routers creates a bridge joining wifi users with wired LAN users. Not here, it has been deliberately configured as an independent network. Road-warriors who attach to the wifi must use VPN tunnelling to access the local wired network. In OpenWRT, it is relatively easy to make the Wifi network an independent subnet. This is an essential security precaution because wifi passwords should not be considered secure: they are often transmitted to third parties, for example, by the cloud backup service in many newer smart phones.
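On OpenWRT, making the wifi an independent subnet (as described in the last point) boils down to giving the radio its own interface instead of bridging it into the LAN. A rough sketch in UCI syntax; the interface name, SSID and addresses here are assumptions for illustration:

```
# /etc/config/network: the wifi gets its own subnet, not bridged to the LAN
config interface 'wifi'
        option proto 'static'
        option ipaddr '192.168.2.1'
        option netmask '255.255.255.0'

# /etc/config/wireless: attach the access point to the 'wifi' network
config wifi-iface
        option device 'radio0'
        option mode 'ap'
        option network 'wifi'
        option ssid 'example-ap'
        option encryption 'psk2'
```

With this arrangement, wifi clients can only reach the wired LAN through the VPN gateway's firewall rules, never by layer-2 bridging.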
Package mayhem

The major components are packaged on all the major Linux distributions. Nonetheless, in every case, I found it necessary to re-compile fresh strongSwan packages from sources. It is not so hard to do, but it is necessary and worth the effort. Here are related blog entries where I provide the details about how to re-build fresh versions of the official packages with all necessary features enabled:

Using X.509 certificates as a standard feature of the VPN

For convenience, many people building a point-to-point VPN start with passwords (sometimes referred to as pre-shared keys (PSK)) as a security mechanism. As the VPN grows, passwords become unmanageable. In this solution, we only look at how to build a VPN secured by X.509 certificates. The certificate concept is not hard. In this scenario, there are a few things that make it particularly easy to work with certificates:
  • strongSwan comes with a convenient command line tool, ipsec pki. Many of its functions are equivalent to things you can do with OpenSSL or GnuTLS. However, the ipsec pki syntax is much more lightweight and simply provides a convenient way to do the things you need to do when maintaining a VPN.
  • Many of the routine activities involved in certificate maintenance can be scripted. My recent blog about using Android clients with strongSwan gives a sample script demonstrating how ipsec pki commands can be used to prepare a PKCS#12 (.p12) file that can be easily loaded into an Android device using the SD-card.
  • For building a VPN, there is no need to use a public certificate authority such as Verisign. Consequently, there is no need to fill in all their forms or make any payments for each device/certificate. Some larger organisations do choose to outsource their certificate management to such firms. For smaller organisations, an effective and sometimes better solution can be achieved by maintaining the process in-house with a private root CA.
UDP fragmentation during IPsec IKEv2 key exchange and ECDSA

A common problem for IPsec VPNs using X.509 certificates is the fragmentation of key exchange datagrams during session setup. Sometimes it works, sometimes it doesn't. Various workarounds exist, such as keeping copies of all certificates from potential peers on every host. As the network grows, this becomes inconvenient to maintain, and to some extent it eliminates the benefits of using PKI.

Fortunately, there is a solution: Elliptic Curve Cryptography (ECC). Many people currently use RSA key-pairs. Best practice suggests using RSA keys of at least 2048 bits and often 4096 bits. A 384-bit ECC key is considered equivalent to a 7680-bit RSA key-pair. Consequently, ECDSA certificates are much smaller than RSA certificates. Furthermore, at these key sizes, the key exchange packets are almost always smaller than the typical 1500-byte MTU.

A further demand for ECDSA is arising due to the use of ECC within smart cards. Many smart cards don't support any RSA key larger than 2048 bits, while the highly secure 384-bit ECC key is implemented in quite a few common smart cards. Smart card vendors have shown a preference for ECC keys due to the US Government's preference for ECC, and the lower computational overheads make them more suitable for constrained execution environments. Anyone who wants to use smart cards as part of their VPN or general IT security, now or in the future, needs to consider ECC/ECDSA.

Making the network simple with Shorewall zones

For this example, we are not just taking some easy point-to-point example. We have a real-world, multi-site, multi-device network with road warriors. Simplifying this architecture is important to help us understand and secure it. The solution? Each part of the network is abstracted to a "zone" in Shorewall. In the diagram above, the zone names are in square brackets. The purpose of each zone is described below:
Zone name Description
loc This is the private LAN and contains servers like databases, private source code repositories and NFS file servers
dmz This is the DMZ and contains web servers that are accessible from the public internet. Some of these servers talk to databases or message queues in the LAN network loc
vpn_a These are road warriors that are not very trustworthy, such as mobile devices. They are occasionally stolen and usually full of spyware (referred to by users as "apps"). They have limited access to ports on some DMZ servers, e.g. for sending and receiving mail using SMTP and IMAP (those ports are not exposed to the public Internet at large). They use the VPN tunnel for general internet access/browsing, to avoid surveillance by their mobile carrier.
vpn_b These are managed laptops that have a low probability of malware infection. They may well be using smart cards for access control. Consequently, they are more trusted than the vpn_a users and have access to some extra intranet pages and file servers. Like the smart-phone users, they use the VPN tunnel for general internet access/browsing, to avoid surveillance by third-party wifi hotspot operators.
vpn_c This firewall zone represents remote sites with managed hardware, such as branch offices or home networks with IPsec routers running OpenWRT.
cust These are servers hosted for third-parties or collaborative testing/development purposes. They have their own firewall arrangements if necessary.
net This zone represents traffic from the public Internet.
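Returning to the key-size argument above: the equivalence between a 384-bit ECC key and a 7680-bit RSA key comes from the usual NIST strength tables. A small sketch to make the packet-size reasoning concrete (these are approximate raw key sizes, not full certificate sizes):

```python
# Approximate strength equivalences (NIST SP 800-57): security bits mapped
# to (RSA modulus bits, ECC key bits).  Smaller keys yield smaller
# certificates, which keeps IKEv2 key exchange under a ~1500-byte MTU.
EQUIV = {
    112: (2048, 224),
    128: (3072, 256),
    192: (7680, 384),
    256: (15360, 521),
}
for sec, (rsa_bits, ecc_bits) in sorted(EQUIV.items()):
    print(f"{sec:3d}-bit security: RSA-{rsa_bits:5d} (~{rsa_bits // 8:4d}-byte key) "
          f"vs ECC-{ecc_bits:3d} (~{ecc_bits // 8:2d}-byte key)")
```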
A practical Shorewall configuration

Shorewall is chosen to manage the iptables and ip6tables firewall rules. Shorewall provides a level of abstraction that makes netfilter much more manageable than manual iptables scripting. The Shorewall concept of zones is very similar to the zones implemented in OpenWRT, and this is an extremely useful paradigm for firewall management. Practical configuration of Shorewall is very well explained in the Shorewall quick start. The one thing that is not immediately obvious is a strategy for planning the contents of the /etc/shorewall/policy and /etc/shorewall/rules files. The exact details for making it work effectively with a modern IPsec VPN are not explained in a single document, so I've gathered those details below as well. An effective way to plan the Shorewall zone configuration is with a table like this:
Destination zone
loc dmz vpn_a vpn_b vpn_c cust net
Source zone loc \
dmz ? \ X X X
vpn_a ? ? \ X X
vpn_b ? ? X \ X
vpn_c X \
cust X ? X X X \
net X ? X X X \
The symbols in the table are defined:
Symbol Meaning
(blank) ACCEPT in policy file
X REJECT or DROP in policy file
? REJECT or DROP in policy file, but ACCEPT some specific ports in the rules file
Naturally, this modelling technique is valid for both IPv4 and IPv6 firewalling (with Shorewall6). Looking at the table in two dimensions, it is easy to spot patterns. Each pattern can be condensed into a single entry in the policy file. For example, it is clear from the first row that the loc zone can access all other zones. That can be expressed very concisely with a single line in the policy file:
loc all ACCEPT
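The translation from the planning table to the policy file can even be scripted, which helps keep the two in sync as zones are added. A toy sketch (the zone names are the ones used in this article; this is an illustration, not a Shorewall tool):

```python
# Toy translation of the zone-planning table into Shorewall policy lines.
# "A" = ACCEPT; anything absent defaults to "X" (REJECT); "?" rows would be
# handled with specific port openings in the rules file instead.
ZONES = ["loc", "dmz", "vpn_a", "vpn_b", "vpn_c", "cust", "net"]

matrix = {
    "loc":   {z: "A" for z in ZONES},   # the LAN may reach everything
    "vpn_c": {z: "A" for z in ZONES},   # trusted branch/home routers
    "cust":  {"net": "A"},              # hosted servers reach the Internet
    "net":   {"cust": "A"},             # and the Internet reaches them
}

def policy_lines(matrix):
    lines = []
    for src, row in matrix.items():
        if all(row.get(z, "X") == "A" for z in ZONES):
            lines.append(f"{src} all ACCEPT")  # a full row collapses to one line
        else:
            lines += [f"{src} {dst} ACCEPT" for dst, v in row.items() if v == "A"]
    lines.append("all all REJECT")             # default deny for everything else
    return lines

for line in policy_lines(matrix):
    print(line)
```

Running this reproduces the example /etc/shorewall/policy file given later in this article.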
Specific Shorewall tips for use with IPsec VPNs and strongSwan

Shorewall has several web pages dedicated to VPNs, including the IPsec-specific documentation. Personally, I found that I had to gather a few details from several of these pages to make an optimal solution. Here are those tips:
  • Ignore everything about the /etc/shorewall/tunnels file. It is no longer needed or used.
  • Name the VPN zones (we call them vpn_a, vpn_b and vpn_c) in the zones file but there is no need to put them in the /etc/shorewall/interfaces file.
  • The /etc/shorewall/hosts file is not just for hosts and can be used to specify network ranges, such as those associated with the VPN virtual IP addresses. The ranges you put in this file should match the rightsourceip pool assignments in strongSwan's /etc/ipsec.conf
  • One of the examples suggests using mss=1400 in the /etc/shorewall/zones file. I found that is too big and leads to packets being dropped in some situations. To start with, try a small value such as 1024 and then try larger values later after you prove everything works. Setting mss for IPsec appears to be essential.
  • Do not use the routefilter feature in the /etc/shorewall/interfaces file, as it is not compatible with IPsec.
Otherwise, just follow the typical examples from the Shorewall quick start guide and configure it to work the way you want. Here is an example /etc/shorewall/zones file:
fw firewall
net ipv4
dmz ipv4
loc ipv4
cust ipv4
vpn_a ipsec mode=tunnel mss=1024
vpn_b ipsec mode=tunnel mss=1024
vpn_c ipsec mode=tunnel mss=1024
Here is an example /etc/shorewall/hosts file describing the VPN ranges from the diagram:
vpn_a eth0:10.1.100.0/24 ipsec
vpn_b eth0:10.1.200.0/24 ipsec
vpn_c eth0:192.168.1.0/24 ipsec
Here is an example /etc/shorewall/policy file based on the table above:
loc all ACCEPT
vpn_c all ACCEPT
cust net ACCEPT
net cust ACCEPT
all all REJECT
Here is an example /etc/shorewall/rules file based on the network:
SECTION ALL
# allow connections to the firewall itself to start VPNs:
# Rule  source    dest    protocol/port details
ACCEPT   all       fw                ah
ACCEPT   all       fw                esp
ACCEPT   all       fw                udp 500
ACCEPT   all       fw                udp 4500
# allow access to HTTP servers in DMZ:
ACCEPT   all       dmz               tcp 80
# allow connections from HTTP servers to MySQL database in private LAN:
ACCEPT   dmz       loc:10.2.0.43     tcp 3306
# allow connections from all VPN users to IMAPS server in private LAN:
ACCEPT vpn_a,vpn_b,vpn_c loc:10.2.0.58 tcp 993
# allow VPN users (but not the smartphones in vpn_a) to the
# PostgresQL database for PostBooks accounting system:
ACCEPT vpn_b,vpn_c loc:10.2.0.48      tcp 5432
SECTION ESTABLISHED
ACCEPT   all       all
SECTION RELATED
ACCEPT   all       all
Once the files are created, Shorewall can be easily activated with:
# shorewall compile && shorewall restart
strongSwan IPsec VPN setup

Like Shorewall, strongSwan is also very well documented, and I'm just going to focus on those specific areas that are relevant to this type of VPN project.
  • Allowing the road-warriors to send all browsing traffic over the VPN means including leftsubnet=0.0.0.0/0 in the VPN server's /etc/ipsec.conf file. Be wary though: sometimes the road-warriors start sending DHCP renewals over the tunnel instead of to their local DHCP server.
  • As we are using Shorewall zones for firewalling, you must set the options leftfirewall=no and lefthostaccess=no in ipsec.conf. Shorewall already knows about the remote networks as they are defined in the /etc/shorewall/hosts file and so firewall rules don't need to be manipulated each time a tunnel goes up or down.
  • As discussed above, X.509 certificates are used for peer authentication. In the certificate Distinguished Name (DN), store the zone name in the Organizational Unit (OU) component, for example, OU=vpn_c, CN=gw.branch1.example.org
  • In the ipsec.conf file, match the users to connections using wildcard specifications such as rightid="OU=vpn_a, CN=*"
  • Put a subjectAltName with hostname in every certificate. The --san option to the ipsec pki commands adds the subjectAltName.
  • Keep the certificate distinguished names (DN) short, this makes the certificate smaller and reduces the risk of fragmented authentication packets. Many examples show a long and verbose DN such as C=GB, O=Acme World Wide Widget Corporation, OU=Engineering, CN=laptop1.eng.acme.example.org. On a private VPN, it is rarely necessary to have all that in the DN, just include OU for the zone name and CN for the host name.
  • As discussed above, use the ECDSA scheme for keypairs (not RSA) to ensure that the key exchange datagrams don't suffer from fragmentation problems. For example, generate a keypair with the command ipsec pki --gen --type ecdsa --size 384 > user1Key.der
  • Road warriors should have leftid=%fromcert in their ipsec.conf file. This forces them to use the Distinguished Name and not the subjectAltName (SAN) to identify themselves.
  • For road warriors who require virtual tunnel IPs, configure them to request both IPv4 and IPv6 addresses (dual stack) with leftsourceip=%config4,%config6 and on the VPN gateway, configure the ranges as arguments to rightsourceip
  • To ensure that roadwarriors query the LAN DNS, add the DNS settings to strongswan.conf and make sure road warriors are using a more recent strongSwan version that can dynamically update /etc/resolv.conf. Protecting DNS traffic is important for privacy reasons. It also makes sure that the road-warriors can discover servers that are not advertised in public DNS records.
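For the DNS point in the last bullet, the gateway-side configuration amounts to a couple of lines in strongswan.conf. A sketch for strongSwan 5.x; the resolver addresses are assumptions for this example:

```
# /etc/strongswan.conf on the VPN gateway (sketch)
charon {
        # DNS servers handed to road-warriors along with their virtual IPs;
        # 10.2.0.1 and 10.2.0.2 are assumed LAN resolvers for this example
        dns1 = 10.2.0.1
        dns2 = 10.2.0.2
}
```

A road-warrior running a recent strongSwan with the resolve plugin will install these servers in /etc/resolv.conf while the tunnel is up.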
Central firewall/VPN gateway configuration

Although they are not present in the diagram, IPv6 networks are also configured in these strongSwan examples. It is very easy to combine IPv4 and IPv6 into a single /etc/ipsec.conf file. As long as the road-warriors have leftsourceip=%config4,%config6 in their own configurations, they will operate dual-stack IPv4/IPv6 whenever they connect to the VPN. Here is an example /etc/ipsec.conf for the central VPN gateway:
config setup
        charonstart=yes
        charondebug=all
        plutostart=no
conn %default
        ikelifetime=60m
        keylife=20m
        rekeymargin=3m
        keyingtries=1
        keyexchange=ikev2
conn branch1
        left=198.51.100.1
        leftsubnet=203.0.113.0/24,10.0.0.0/8,2001:DB8:12:80::/64
        leftcert=fw1Cert.der
        leftid=@fw1.example.org
        leftfirewall=no
        lefthostaccess=no
        right=%any
        rightid=@branch1-fw.example.org
        rightsubnet=192.168.1.0/24
        auto=add
conn rw_vpn_a
        left=198.51.100.1
        leftsubnet=0.0.0.0/0,::0/0
        leftcert=fw1Cert.der
        leftid=@fw1.example.org
        leftfirewall=no
        lefthostaccess=no
        right=%any
        rightid="OU=vpn_a, CN=*"
        rightsourceip=10.1.100.0/24,2001:DB8:1000:100::/64
        auto=add
conn rw_vpn_b
        left=198.51.100.1
        leftsubnet=0.0.0.0/0,::0/0
        leftcert=fw1Cert.der
        leftid=@fw1.example.org
        leftfirewall=no
        lefthostaccess=no
        right=%any
        rightid="OU=vpn_b, CN=*"
        rightsourceip=10.1.200.0/24,2001:DB8:1000:200::/64
        auto=add
Sample branch office or home router VPN configuration

Here is an example /etc/ipsec.conf for the Linux server or OpenWRT VPN at the branch office or home:
conn head_office
        left=%defaultroute
        leftid=@branch1-fw.example.org
        leftcert=branch1-fwCert.der
        leftsubnet=192.168.1.0/24,2001:DB8:12:80::/64
        leftfirewall=no
        lefthostaccess=no
        right=fw1.example.org
        rightid=@fw1.example.org
        rightsubnet=203.0.113.0/24,10.0.0.0/8,2001:DB8:1000::/52
        auto=start
# notice we only allow vpn_b users, not vpn_a
# these users are given virtual IPs from our own
# 192.168.1.0 subnet
conn rw_vpn_b
        left=branch1-fw.example.org
        leftsubnet=192.168.1.0/24,2001:DB8:12:80::/64
        leftcert=branch1-fwCert.der
        leftid=@branch1-fw.example.org
        leftfirewall=no
        lefthostaccess=no
        right=%any
        rightid="OU=vpn_b, CN=*"
        rightsourceip=192.168.1.160/27,2001:DB8:12:80::8000/116
        auto=add
Further topics

Shorewall and Shorewall6 don't currently support a unified configuration. This can make it slightly tedious to duplicate rules between the two IP variations. However, the syntax for IPv4 and IPv6 configuration is virtually identical.

Shorewall currently supports only Linux netfilter rules. In theory it could be extended to support other types of firewall API, such as pf used by OpenBSD and the related BSD family of systems.

A more advanced architecture would split the single firewall into multiple firewall hosts, like the inner and outer walls of a large castle. The VPN gateway would also become a standalone host in the DMZ. This would require more complex routing table entries.

Smart cards with PINs provide an effective form of two-factor authentication that can protect certificates for remote users. Smart cards are already well documented by strongSwan, so I haven't repeated any of that material in this article.

Managing a private X.509 certificate authority in practice may require slightly more effort than I've described, especially as an organisation grows. Small networks and home users don't need to worry about these details too much, but for most deployments it is necessary to consider things like certificate revocation lists and special schemes to protect the root certificate's private key. EJBCA is one open source project that might help.

Some users may want to consider ways to prevent the road-warriors from accidentally browsing the Internet when the IPsec tunnel is not active. Such policies could be implemented with some basic iptables firewall rules on the road-warrior devices.

Summary

Using these strategies and configuration tips, planning and building a VPN will hopefully be much simpler. Please feel free to ask questions on the mailing lists for any of the projects discussed in this blog.
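For the road-warrior lockdown idea mentioned under "Further topics", the netfilter rules on the device could look roughly like this. This is an untested sketch: the interface name wlan0 and the reliance on the policy match module are assumptions for illustration:

```
# Let IKE and ESP out, let traffic that will be IPsec-protected out,
# and drop any other cleartext leaving the untrusted interface.
iptables -A OUTPUT -o wlan0 -p udp --dport 500 -j ACCEPT
iptables -A OUTPUT -o wlan0 -p udp --dport 4500 -j ACCEPT
iptables -A OUTPUT -o wlan0 -p esp -j ACCEPT
iptables -A OUTPUT -o wlan0 -m policy --dir out --pol ipsec -j ACCEPT
iptables -A OUTPUT -o wlan0 -j DROP
```

With rules like these, a road-warrior whose tunnel is down simply loses connectivity rather than silently browsing in the clear.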

24 June 2013

Daniel Pocock: Private WANs may be less secure than VPNs

The latest round of Snowden revelations concern a British GCHQ program dubbed "Mastering the Internet" (MTI). The program involves, among other things, tapping the world's under-sea fibre-optic cables and systematically monitoring all communications. One of the most significant facts that emerges is the benefit of using IPsec and VPNs over the public Internet instead of trusting un-encrypted traffic to pass over private WAN connections. The definition of a private WAN connection is quite broad: many ISPs in the UK, for example, use the British Telecom OpenReach platform to tunnel DSL connections back to their data centers and offer virtual WANs to their business customers.

Can any wifi be trusted?

It should be assumed that all wifi networks can be compromised at will by a sophisticated attacker. This is another situation where VPNs may be part of the solution: even WPA2-secured wifi segments should be treated as untrusted public Internet networks and not bridged or routed onto any private subnets. Users with devices on a wifi network should be forced to use a VPN over the wifi connection, even when the wifi is within the same building as the private network.

Tor may be compromised by an attacker with this much data

As I've mentioned in previous blogs with a focus on real-time communications, there is no such thing as perfect privacy. Simply encrypting data with IPsec does not prevent the attacker knowing the volume of data transmitted or the time it was transmitted. Using a service like Tor for onion-routing provides some benefit. However, an attacker with sufficient infiltration of many points in the network may be able to deduce the real origins of a data transfer over Tor using some combination of packet sizes and timestamps. "Mastering the Internet" may have sufficient coverage to achieve that.
Tor could overcome this by putting dummy data into the network and adding fragmentation, but such schemes may add latency, reduce throughput and require significant effort to upgrade existing users.

Gaining intelligence from encrypted data

VoIP signals are typically compressed using some type of codec. These are dedicated, lossy compression schemes optimised for voice signals. Some codecs use a fixed compression ratio. Other codecs use a variable bit rate (VBR): a simple example involves sending shorter packets to indicate a period of silence. Encrypting these packets prevents an attacker seeing what is inside them, but the attacker can easily detect the lengths of the packets. Researchers have demonstrated that statistical techniques can be applied to VBR dataflows to identify syllables used in speech and then make a hypothesis about what has been said. (This would be an interesting hack to try to reproduce by collecting data with tcpdump and analysing it with R.)
Identifying phrases in encrypted VoIP using statistical analysis
In many cases like those described, it is necessary to go beyond default encryption settings and put extra dummy data or frame padding into the data flow to gain a more foolproof level of privacy. Simply connecting VoIP phones to an IPsec network doesn't automatically make them secure.

The rise of the free-software VPN

Even if the default configurations are not always universally resilient to every type of attack, many free software products provide an excellent building block for VPNs. Having the source code and specifications on hand, free software developers can then innovate to mix-and-match different technologies or add random data flows to mask the real activity on the network. In a recent hosting migration project, I successfully used the rather trivial VPN package tinc to bridge the ethernet segments between virtual machines in the old and new networks.
This allowed virtual machines to be moved one-by-one over a period of a week. This is a remarkably simple solution that can protect the integrity of communications on a private subnet. A more comprehensive solution is strongSwan, which provides support for industry-standard IPsec in various topologies. They now offer a full IPsec client for Android phones (hopefully it will be made available on F-Droid: using a Google account to access Google Play may result in Google accidentally helping you "back up" your settings, including credentials, to their cloud, undermining the privacy of the system). An IPsec VPN can be deployed to all hosts on a network: then they can use opportunistic encryption for any type of communication without needing to worry about the underlying network. Furthermore, it is possible to add smart cards to further boost VPN security while using free software for the whole solution. For those interested in looking beyond Linux solutions, the *BSD range of operating systems, particularly OpenBSD, provide a compelling alternative. This paper about OpenBSD's iked, which is also available on other platforms as OpenIKEd, is very recent and provides a thorough overview of its current status. One notable limitation is that their MOBIKE support is not yet complete.

Many older VPNs are likely to need an upgrade

When I first started deploying Linux VPNs in the 90s the market was favourable for any type of Linux VPN. In extreme cases (which were surprisingly common at the time) companies accustomed to paying $20,000 per month for a leased line would happily pay a flat fee of $100,000 to purchase a VPN that was cobbled together using less than $10,000 of commodity hardware and some free software. Many of the first generation of these VPNs were built using RSA key lengths and algorithms no longer considered secure, such as DES.
Some of these solutions (or products derived from them) may still remain in use today: managers may be slightly embarrassed to put aside their $100,000 VPN and replace it with a $50 router that is more secure. While that is the more extreme example, there are many intermediate solutions out there in the field. There are clear opportunities here to upgrade users to free software solutions.
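The VBR side channel described above is easy to demonstrate once packet lengths have been captured (for example with tcpdump). This is a toy sketch only: the packet sizes and the 60-byte threshold are invented for illustration, and a real analysis would be calibrated against the codec in use.

```python
# Toy illustration of the VBR side channel: classify encrypted VoIP
# packets as speech or silence purely from their length on the wire.
# Lengths and the 60-byte threshold are invented for illustration.
def classify(packet_lengths, silence_threshold=60):
    return ["silence" if n < silence_threshold else "speech"
            for n in packet_lengths]

# Lengths as a sniffer might report them: talk, pause, talk.
trace = [120, 118, 121, 42, 40, 43, 119, 122]
print(classify(trace))
# → ['speech', 'speech', 'speech', 'silence', 'silence', 'silence',
#    'speech', 'speech']
```

The published attacks go much further, matching sequences of packet lengths against phoneme models; the point here is only that encryption alone does not hide the lengths, which is why the frame padding mentioned above matters.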

9 June 2013

Daniel Pocock: The Pope saw right through them


Popemobile - is it secure enough for the Pope to make confession without the NSA hearing anything?

In his final days as Pope, Benedict XVI took it upon himself to comment on a political subject that has had the whole world talking for some time: those invasive airport scanners pushed by US securocrats. "In every action, it is above all essential to protect and value the human person in their integrity." "Even in this situation, one must never forget that respecting the primacy of the human person and attention to his or her needs does not make the service less efficient nor penalise economic management." At least, the press thought he was talking about body scanners. Maybe it was divine intervention, or maybe just a very good understanding of the dark side of human nature, but these comments could be equally applicable to the unfolding scandal engulfing the NSA as they seek to monitor, record and understand every thought and feeling we have at every moment of our lives. On the issue of those scanners, however, it now appears clear that the reassurances we've been hearing - that invasive airport body scanners don't save copies of the pictures, for example - are about as believable as the ousted Iraqi Information Minister Mohammed Saeed al-Sahaf appearing daily on TV to reassure the world that Saddam would be victorious even as the invading forces entered Baghdad.

Saddam's former Information Minister, Mohammed Saeed al-Sahaf

Ingo Juergensmann: Edward Snowden whistleblowed PRISM

Sometimes there are true heroes, even today. Like Edward Snowden, who made PRISM publicly known. There's an interview by The Guardian with Edward Snowden:
In a note accompanying the first set of documents he provided, he wrote: "I understand that I will be made to suffer for my actions," but "I will be satisfied if the federation of secret law, unequal pardon and irresistible executive powers that rule the world that I love are revealed even for an instant." [...] He has had "a very comfortable life" that included a salary of roughly $200,000, a girlfriend with whom he shared a home in Hawaii, a stable career, and a family he loves. "I'm willing to sacrifice all of that because I can't in good conscience allow the US government to destroy privacy, internet freedom and basic liberties for people around the world with this massive surveillance machine they're secretly building."
Neither Bradley Manning nor Edward Snowden should be sentenced, but the government that is responsible for surveillance programs like PRISM should be.

6 February 2013

Biella Coleman: Edward Tufte was a phreak

It has been so very long since I have left a trace here. I guess moving to two new countries (Canada and Quebec), starting a new job, working on Anonymous, and finishing my first book was a bit much. I miss this space, not so much because what I write here is any good, but it is a handy way for me to keep track of time and what I do and even think. My life feels like a blur at times and hopefully here I can see its rhythms and changes a little more clearly if I occasionally jot things down. So I thought it would be nice to start with something that I found surprising: the famed information designer Edward Tufte, a professor emeritus at Yale, was a phone phreak (and there is a stellar new book on the topic by former phreak Phil Lapsley). He spoke about his technological exploration during a sad event, a memorial service in NYC which I attended for the hacker and activist Aaron Swartz. I had my wonderful RA transcribe the speech, so here it is [we may not have the right spelling for some of the individuals, so please let us know of any mistakes]:
Edward Tufte's Speech From Aaron Swartz's Memorial
Speech starts 41:00 [video cuts out in beginning]
We would then meet over the years for a long talk every now and then, and my responsibility was to provide him with a reading list, a reading list for life, and then about two years ago Quinn had Aaron come to Connecticut and he told me about the four and a half million downloads of scholarly articles and my first question is, "Why isn't MIT celebrating this?"
[Video cuts out again]
Obviously helpful in my career there, he then became president of the Mellon Foundation, he then retired from the Mellon Foundation, but he was asked by the Mellon Foundation to handle the problem of JSTOR and Aaron. So I wrote Bill Bullen (sp?) an email about it, I said first that Aaron was a treasure and then I told a personal story about how I had done some illegal hacking and been caught at it and what happened. In 1962, my housemate and I invented the first blue box, that's a device that allows for free, undetectable, unbillable long distance telephone calls. And we got this up and played around with it and the end of our research came when we concluded what was the longest long distance call ever made, which was from Palo Alto to New York time-of-day via Hawaii. Well, during our experimentation, AT&T, on the second day it turned out, had tapped our phone, but it wasn't until about 6 months later that I got a call from the gentleman, AJ Dodge, senior security person at AT&T, and I said, "I know what you're calling about." And so we met and he said, "What you are doing is a crime that would..." you know, all that. But I knew it wasn't serious because he actually cared about the kind of engineering stuff and complained that the tone signals we were generating were not the standard, because they record them and play them back in the network to see what numbers you were trying to reach, but they couldn't break through the noise of our signal. The upshot of it was that, oh, and he asked why we went off the air after about 3 months, because this was to make long distance telephone calls for free, and I said this was because we regarded it as an engineering problem and we made the longest long distance call and so that was it.
So the deal was, as I explained in my email to Bill Bullen, that we wouldn't try to sell this (and we were told, I was told, that organized crime would pay a great deal for this), we wouldn't do any more of it, and that we would turn our equipment over to AT&T, and so they got a complete vacuum tube isolator kit for making long distance phone calls. But I was grateful to AJ Dodge and I must say, AT&T, that they decided not to wreck my life. And so I told Bill Bullen that he had a great opportunity here, to not wreck somebody's life, and of course he thankfully did the right thing.
Aaron's unique quality was that he was marvelously and vigorously different. There is a scarcity of that. Perhaps we can all be a little more different too.
Thank you very much.

27 October 2012

Russ Allbery: Review: Devil Take the Hindmost

Review: Devil Take the Hindmost, by Edward Chancellor
Publisher: Plume
Copyright: 1999
Printing: June 2000
ISBN: 0-452-28180-6
Format: Trade paperback
Pages: 349
Subtitled A History of Financial Speculation, Devil Take the Hindmost is exactly that: a history, with chapters on notable speculative bubbles or surges going back to the 1630s. The first significant bubble described is, of course, the notorious tulip speculation in Holland, but Chancellor is more thorough than any previous history I've read. There are chapters on the early stock market schemes, the South Sea bubble, 1820s emerging markets, the railway mania of 1845, the Gilded Age, the late 1970s and early 1980s, and the Japanese real estate bubble of the 1980s, as well as the mandatory look at the stock market crash of 1929. Each is thorough and detailed; indeed, perhaps slightly too detailed, as the clear narrative picture can occasionally groan under the weight of names and capsule biographies. The first thing one should realize about this book is that, despite the apparent currency of the topic, it was written in 1999. That means it predates both the subprime real estate collapse and the dot.com boom, as well as some more minor but possibly relevant events such as the Enron electricity market manipulation. Japan is the most recent bubble that Chancellor can cover; one can only speculate as to what he would make of the 2008 financial crash. The other focus to be aware of is the omission of sovereign debt. Chancellor is exclusively focused on private markets, usually stock markets, and (apart from the chapter on Japan) Western markets, mostly in the US and the UK. Governments play a role in some of the early speculative bubbles (most notably the South Sea bubble), but Chancellor's topic is private speculative frenzies, not the risks and effects of government borrowing. Interrogation of the role of government is mostly limited to cursory discussions of the impact of regulation, mostly in the negative. More on that in a moment. This is a history as opposed to either a polemic or a prescription. 
Chancellor does not set out to explain how to curb financial speculation. Insofar as he expresses an opinion on it, he presents speculation as a force of unbridled freedom in natural tension with regulation and financial conservatism and sees a cyclical pattern of speculative frenzy, collapse, retrenchment, and then renewed speculation. Speculation, as presented here, is more akin to a force of nature than, as seen in most other books on the topic, a phenomenon of greed that should be limited or prevented. Indeed, Chancellor seems skeptical of the entire project of preventing or limiting speculation, and presents considerable reason for the reader to become skeptical as well. The history he presents is one of speculative manias adapting to and circumventing every control that governments and markets have attempted to place on them. When certain practices are outlawed (and some modern complex financial practices have surprisingly long histories), people just find loopholes or alternative approaches through which to accomplish the same thing. Or the speculative mania takes an unexpected new form. Nor do the manias seem to be decreasing in severity as more regulation, custom, and restrictions attempt to blunt them. Even without adding the financial collapse of 2008, the impacts of the speculative frenzies documented here seem, if anything, to be growing. This is, therefore, a relatively straightforward history, but it's an entertaining one, filling in details that I'd not previously been aware of. The complexity of financial schemes as early as the 1600s was particularly eye-opening. There's a tendency to believe that complex derivative schemes are an artifact of modern financial markets, but they've been a part of speculation for as long as Chancellor goes back. Computers may have added layers to the mathematics, but they haven't added as much to the legal stratagems and levels of indirection as one might think.
Chancellor emphasizes that throughout: speculation involves a lot of complexity, not necessarily to deceive but to construct situations in which it's in the interest (at least short term) of all involved parties to continue and escalate the speculation. While Chancellor doesn't provide much encouragement to the reader hoping to find ways to curb speculation, he does show some commonalities that provide a bit of a handle on how speculation starts. The one that leapt out at me was leverage. Every speculative bubble documented in Devil Take the Hindmost involves borrowing money to reinvest. The mechanism keeps changing, from straightforward loans during the tulip craze to much more complex constructions in modern finance, primarily in response to regulation that attempted to reduce leverage but only banned one specific technique. But whatever method is used, at the bottom is always some scheme for purchasing an asset using debt, magnifying both positive and negative price swings. Some variation of buying stocks on margin is particularly common; Chancellor documents how it was reinvented despite the limits on straight margin purchases instituted after the Great Depression. There is no recommendation here for any regulatory or legal action, but it's obvious from the repetition of history that limiting leverage would be the place to focus attention. I enjoyed this history, but it can be dry, and at times is a bit overstuffed with names, unimportant details, and microbiographies. It's also unfortunate that Chancellor missed two of the most significant (and two very different) speculative bubbles through the timing of his work. It would have been interesting to see them through the lens of his history. What we have is still worth reading, but also somewhat frustrating in Chancellor's refusal to draw or at least underline general conclusions.
Devil Take the Hindmost is long on history but short on analysis, and while I appreciate the historian's focus in a source of data, it left me somewhat unsatisfied. Rating: 6 out of 10

27 May 2012

Lars Wirzenius: Obnam 0.29 (backup software)

I've just pushed out Obnam 0.29, my backup program. NEWS snippet below. This is a RELEASE CANDIDATE for 1.0, since there are no known bugs that would block a 1.0 release. I'm not entirely happy with Obnam's performance over sftp with small files, but it's not something I am prepared to let block 1.0. I am going to try a few things to improve it, but I want this release out first. Please test and report any problems via this mailing list, as bugs on the website, or via IRC. I am on IRC only intermittently until 1.0 is released, so e-mail and bug reports are the best options.

9 April 2012

Gunnar Wolf: Xilitla

On this Semana Santa (holy/major week), Regina and I took a little vacation: We went ~400Km North, to the magical Xilitla, in the Eastern part of San Luis Potosí state. To get there, we went by the Sierra Gorda de Querétaro route: A beautiful but quite hard to drive road, crossing desert, forest and jungle through a very steep mountain ridge. What does hard to drive mean? It means that for ~200Km we had a speed average of 40-50Km/h. The road is in very good condition, and traffic was quite light. And although our plans were to come back via the other ridge road (crossing Hidalgo state instead of Querétaro), we were persuaded to go the long way instead: We came back via San Luis Potosí city, making ~700Km instead of ~400, but I'll concede it was a much easier drive. But although I take the road as an important part of the vacation, and although it was a very quick vacation, what is it we went to see there? Xilitla is a town at the beginning of the huasteca potosina region, with really exuberant vegetation, that captured Sir Edward James' heart back in the 1940s. Sir Edward, a noble Englishman, was good friends with several surrealist artists, and became one himself. After moving to Xilitla and buying an impressive chunk of jungle, in the 1960s he started building a surrealist garden in the middle of the jungle, which he continued to work on until his death, in 1984. We took some pictures, but of course, they pay very little tribute to the magic and beauty of the place. And going to the huasteca means going to places of nature, of many crystalline rivers. Yes, only three days (two of them spent getting there and back) are far too little to enjoy it. But even so, we went to the birth of the river Huichihuayán (~45 minutes North of Xilitla) and to the Los Micos waterfalls (~20 minutes West of Ciudad Valles). Very nice places to visit, among so many others. We should go back to the huasteca soon! I uploaded many of the pictures here.
They will not be syndicated on the planets that follow my blog on RSS (or for individuals following RSS, FWIW), but you will find them following the relevant links. And of course: I pay for a very cheap package on my hosting provider. Drupal often answers with an error page when the server is (even mildly) overloaded. So, feel free to hit reload if something appears unavailable.

1 September 2011

Matthew Garrett: The Android/GPL situation

There was another upsurge in discussion of Android GPL issues last month, triggered by a couple of posts by Edward Naughton, followed by another by Florian Mueller. The central thrust is that section 4 of GPLv2 terminates your license on violation, and you need the copyright holders to grant you a new one. If they don't then you don't get to distribute any more copies of the code, even if you've now come into compliance. TL;DR: most Android vendors are no longer permitted to distribute Linux.

I'll get to that shortly. There's a few other issues that could do with some clarification. The first is Naughton's insinuation that Google are violating the GPL due to Honeycomb being closed or their "license washing" of some headers. There's no evidence whatsoever that Google have failed to fulfil their GPL obligations in terms of providing source to anyone who received GPL-covered binaries from them. If anyone has some, please do get in touch. Some vendors do appear to be unwilling to hand over code for GPLed bits of Honeycomb. That's an issue with the vendors, not Google.

His second point is more interesting, but the summary is "Google took some GPLed header files and relicensed them under Apache 2.0, and they've taken some other people's GPLv2 code and put it under Apache 2.0 as well". As far as the headers go, there's probably not much to see here. The intent was to produce a set of headers for the C library by taking the kernel headers and removing the kernel-only components. The majority of what's left is just structure definitions and function prototypes, and is almost certainly not copyrightable. And remember that these are the headers that are distributed with the kernel and intended for consumption by userspace. If any of the remaining macros or inline functions are genuinely covered by the GPLv2, any userspace application including them would end up a derived work. This is clearly not the intention of the authors of the code. The risk to Google here is indistinguishable from zero.

How about the repurposing of other code? Naughton's most explicit description is:

For example, Android uses bootcharting logic, which uses the 'bootchartd' script provided by www.bootchart.org, but a C re-implementation that is directly compiled into our init program. The license that appears at www.bootchart.org is the GPLv2, not the Apache 2.0 license that Google claims for its implementation.

, but there's no indication that Google's reimplementation is a derived work of the GPLv2 original.

In summary: No sign that Google's violating the GPL.

Florian's post appears to be pretty much factually correct, other than this bit discussing the SFLC/Best Buy case:

I personally believe that intellectual property rights should usually be enforced against infringing publishers/manufacturers rather than mere resellers, but that's a separate issue.

The case in question was filed against Best Buy because Best Buy were manufacturing infringing devices. It was a set of own-brand Blu Ray players that incorporated Busybox. Best Buy were not a mere reseller.

Anyway. Back to the original point. Nobody appears to disagree that section 4 of the GPLv2 means that violating the license results in total termination of the license. The disagreement is over what happens next. Armijn Hemel, who has done various work on helping companies get back into compliance, believes that simply downloading a new copy of the code will result in a new license being granted, and that he's received legal advice that supports that. Bradley Kuhn disagrees. And the FSF seem to be on his side.

The relevant language in v2 is:

You may not copy, modify, sublicense, or distribute the Program except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense or distribute the Program is void, and will automatically terminate your rights under this License.

The relevant language in v3 is:

You may not propagate or modify a covered work except as expressly provided under this License. Any attempt otherwise to propagate or modify it is void, and will automatically terminate your rights under this License

which is awfully similar. However, v3 follows that up with:

However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation.

In other words, with v3 you get your license back providing you're in compliance. This doesn't mesh too well with the assumption that you can get a new license by downloading a new copy of the software. It seems pretty clear that the intent of GPLv2 was that the license termination was final and required explicit reinstatement.

So whose interpretation is correct? At this point we really don't know - the only people who've tried to use this aspect of the GPL are the SFLC, and as part of their settlements they've always reinstated permission to distribute Busybox. There's no clear legal precedent. Which makes things a little awkward.

It's not possible to absolutely say that many Android distributors no longer have the right to distribute Linux. But nor is it possible to absolutely say that they haven't lost that right. Any sufficiently motivated kernel copyright holder probably could engage in a pretty effective shakedown racket against Android vendors. Whether they will do so remains to be seen, but honestly if I were an Android vendor I'd be worried. There's plenty of people out there who hold copyright over significant parts of the kernel. Would you really bet on all of them being individuals of extreme virtue?


14 May 2011

Vincent Bernat: FVWM configuration

There is currently some buzz around Gnome 3 and the new release of Ubuntu switching to Unity. I thought this was a good occasion to write this small post to say that I still use FVWM, whose latest version was released a month ago. FVWM is a very old window manager, so old that nobody really knows what the F stands for. It comes with an ugly configuration by default and can be configured to look even uglier. However, FVWM is powerful and highly configurable. Moreover, it is pretty lightweight.

My FVWM configuration

My configuration of FVWM is available on GitHub. It requires the use of fvwm-crystal. If you are running Debian or Ubuntu, there is a package for this. This is a set of powerful configuration files for FVWM which add a lot of goodies and themes. Unfortunately, the project site seems to have disappeared. You can still grab the sources from your nearest Debian mirror. While those configuration files were great, I did not like having such a complex desktop with all those icons and menus. However, what is great with fvwm-crystal is that you can overlay your own configuration: if a file is present in your home directory, it will take precedence over the one installed on the system. So, I just wrote a new recipe. What you may want to steal from my configuration is the ability to alter the configuration depending on whether you are using one or two screens. With one screen, you get:
  • an application panel (a launcher bar) on the upper left corner,
  • an icon manager (which will contain iconified windows) on the upper right corner,
  • a systray on the left of the icon manager,
  • a pager (which shows various desktops) just below it,
  • two icon managers (which act as a taskbar) on the bottom, one for regular applications and the other one for terminal windows.
One screen setup of FVWM With two screens, the organization is different. The two icon managers at the bottom are on their own screens. The pager is between the two screens and the other elements are on the upper left of the right screen. Dual screen setup of FVWM If I am using a single screen and I plug another one, I do something like this to get my setup updated:
$ xrandr --output VGA-1 --auto --output DVI-I-1 --auto --right-of VGA-1
$ killall trayer ; FvwmCommand Restart
I also use conky, a lightweight system monitor. It is started from my .xsession, not through FVWM.
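The one-vs-two screen switching mentioned above relies on knowing how many outputs are connected. The detection step can be sketched by parsing `xrandr` output; this is my own illustration, not code from my configuration, and the sample output is abbreviated.

```python
# Count connected outputs by parsing `xrandr` output; a window manager
# setup script could use such a test to choose between the one- and
# two-screen layouts. The sample text mimics typical xrandr output.
import re

def connected_outputs(xrandr_text):
    # An output line looks like "VGA-1 connected 1920x1080+0+0 ...";
    # "disconnected" lines do not match " connected" after the name.
    return re.findall(r"^(\S+) connected", xrandr_text, re.M)

sample = """Screen 0: minimum 320 x 200, current 3600 x 1080, maximum 8192 x 8192
VGA-1 connected 1920x1080+0+0 (normal left inverted right) 477mm x 268mm
DVI-I-1 connected 1680x1050+1920+0 (normal left inverted right) 474mm x 296mm
HDMI-1 disconnected (normal left inverted right)
"""
print(connected_outputs(sample))  # → ['VGA-1', 'DVI-I-1']
```

In a live script the text would come from running the real `xrandr` command rather than a hard-coded sample.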

Building wallpapers for dual-screen

One difficulty with a multi-head setup is getting the background set properly. I have made a small script to generate one. Use it like this:
$ ./build-wallpaper.py -d ~/.fvwm/wallpapers -c -t ~/tmp/mywallpaper.png
It will randomly select two wallpapers and put them together to look nice on your dual-screen setup. It will also correct the aspect ratio, either by cropping or centering the image. You need the following dependencies to make the script work: python-xpyb and python-imaging. UPDATE: Thomas Adam pointed to Nitrogen, a graphical wallpaper utility supporting multi-head setups. It is a great tool if you want to set your wallpaper manually, but it does not allow you to select a random one. Nowadays, I grab most of my wallpapers from InterfaceLIFT.
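The aspect-ratio correction reduces to a little arithmetic. This is my own illustrative version of the cropping case, not the actual build-wallpaper.py logic:

```python
# Sketch: compute the box that crops an image to a target aspect ratio
# (width/height) while keeping the crop centred. Illustrative only.
def centre_crop(width, height, target_ratio):
    """Return (left, top, right, bottom) of a centred crop."""
    if width / height > target_ratio:        # too wide: trim the sides
        new_w = round(height * target_ratio)
        left = (width - new_w) // 2
        return (left, 0, left + new_w, height)
    else:                                    # too tall: trim top/bottom
        new_h = round(width / target_ratio)
        top = (height - new_h) // 2
        return (0, top, width, top + new_h)

# A 1920x1200 (16:10) image cropped for a 16:9 screen keeps full width:
print(centre_crop(1920, 1200, 16 / 9))  # → (0, 60, 1920, 1140)
```

The resulting box is in the coordinate convention that PIL's Image.crop expects; the centering variant would instead paste the unmodified image onto a canvas of the target size.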

4 January 2011

John Goerzen: Looking back at 2010: reading

A year ago, I posted my reading list for 2010. I listed a few highlights, and a link to my Goodreads page, pointing out that this wasn't necessarily a goal, just a list of things that sounded interesting. I started off with Homer's Iliad, which I tremendously enjoyed; I found parallels to modern life surprisingly common in that ancient tale. I enjoyed it so much, in fact, that I quickly jumped to a book that wasn't on my 2010 list: The Odyssey. I made a somewhat controversial post suggesting that the Old Testament of the Bible can be read similarly to how we read The Odyssey. Homer turned out to be much more exciting than I'd expected. Jordan's Fires of Heaven (WoT #5) was a good read, though it is one of those books that is sometimes action-packed and interesting, and at other times slow-moving and almost depressing. I do plan to continue with the series but I'm not enjoying it as much as I did at first. War and Peace is something I started late last year. I'm about 400 pages into it, which means I've not even read a third of it yet. It has some moving scenes, and is a fun read overall, but the work it takes to keep all the many characters straight can be a bit frustrating at times. Harvey Cox's The Future of Faith was one of the highlights of the year. A thought-provoking read by someone that embraces both science and religion, and shows a vision of religion that returns to its earlier roots, less concerned about what particular truths a person believes in than it is about more fundamental issues. Marcus Borg's Jesus: Uncovering the Life, Teachings, and Relevance of a Religious Revolutionary began with a surprisingly engaging history lesson on how agriculture caused the formation of domination societies. It also described in a lot of detail how historians analyze ancient texts: their drafting, copying, etc.
It paints a vivid portrait of Jewish society in the time that Jesus would have lived, and follows the same lines of thought as Cox regarding religion finally moving past the importance of intellectual assent to a set of statements. Among books that weren't on my 2010 list, I also read (and here I'm not listing all of them, just some highlights): The Cricket on the Hearth, in something of a Christmastime tradition of reading one of the shorter Dickens works. I enjoyed it, but not as much as I enjoyed A Christmas Carol last year. Perhaps I made up for that by watching Patrick Stewart as Scrooge instead. How to Disappear Completely was a fun short humorous read, with a very well-developed first-person narrative. Paralleling my interest in amateur radio, I read and studied three books in order to prepare myself for the different exams. In something of a surprise, I laughed a lot at Sh*t My Dad Says, which was more interesting and funny than I expected it to be. All I can say is that Justin's got quite the dad and quite the interesting childhood. I even read two other recent releases: The Politician (about John Edwards) and Game Change (about the 2008 presidential race). Both were interesting, vibrant, and mostly unsourced, so it is hard to know exactly how much to take from them. And finally, reflecting on travel before my first trip to Europe, Travel as a Political Act, which encourages us to find the fun in having my cultural furniture rearranged and my ethnocentric self-assuredness walloped. And that was fun. Now to make up the 2011 list...

21 November 2010

Russell Coker: Ruxcon 2010

Yesterday and today I attended Ruxcon, the leading technical security conference in Australia [1].

The first lecture I attended was "Breaking Linux Security Protections" by Andrew Griffiths. This included a good overview of many current issues with Linux security. One thing that was particularly noteworthy was his mention of SE Linux policy: he cited the policy for the FTP server as an example of policy that can be regarded as too lax, but also noted that to get SE Linux used, the policies had to be more liberal than we might desire. There is probably scope for someone to give a good lecture about how we are forced to make uncomfortable choices between making security features stronger and making them more usable.

The next lecture I attended was "Breaking Virtualisation" by Endrazine. It makes me wonder how long it will be before someone cracks one of the major cloud hosting services such as EC2; it's not an appealing thought.

Billy Rios gave a really interesting lecture titled "Will it Blend?" about blended exploits. The idea is to try to find a few programs which do things that are slightly undesired (arguably not even bugs) but which, when combined, can result in totally cracking a system. One example was a way of tricking IE into loading a DLL from the desktop, and a way of tricking Safari into saving arbitrary files to the desktop; combine them and you can push a DLL to a victim and make them load it. Learning about these things can really change the way you think about misbehaving programs!

Ben Nagy gave an interesting lecture about "Prospecting for Rootite". His systematic way of finding test cases that cover a large portion of the code of a large application such as MS-Word seems quite effective.
Once you have test cases that cover a lot of code, you can use fuzzing to find flaws.

Edward Farrell gave an informative lecture about "RFID Security". I didn't really learn that much, though; he confirmed my suspicions that RFID implementations generally suck.

Mark Goudie gave a very informative lecture titled "We've been Hacked! What Went Wrong and Why". Mark works for Verizon, often with the US Secret Service, investigating security breaches. He presented a lot of information that I have not seen before and made some good arguments in support of companies being more proactive in protecting their systems from attack.

Stephen Glass and Mark Robert gave a lecture titled "Security in Public Safety Radio Systems" which mainly focussed on digital radios used by the Australian police. It would be good if the police got people like them to test out new kit before ordering it in bulk; it seems that they will be using defective radios for a long time (it's not easy or cheap to replace them once they are deployed).

Edward Farrell gave an interesting lecture titled "Hooray for Reading: The Kindle & You" about hacking the Kindle. Unfortunately they haven't worked out how to get GUI code going on a hacked Kindle yet, so there are some limitations as to what can be done.

I think that the most interesting lecture of the conference was "This Job Makes You Paranoid" by Alex Tilley of the Australian Federal Police. He gave some interesting anecdotes about real cases to illustrate his points, and he advocated the police position really well. I've attended several lectures by employees of law enforcement agencies, but none of them demonstrated anywhere near the understanding of their audience that Alex did.

The last lecture I attended was "Virtualisation Security State of the Union" by David Jorn of Red Hat.
He gave an interesting summary of some of the issues, including mentioning how SE Linux is being used for confining KVM virtual machines.

Ruxcon was a great conference and I definitely recommend attending it. I have to note that even though there are police attending and lecturing, it's not entirely a white-hat affair. One thing that I hope they do next year is get a bigger venue. The foyer was rather crowded, and because it had a hard floor it was really noisy between lectures. Space and carpet are two really important things when you have lots of people in one room!
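The two-stage workflow mentioned in the notes on Ben Nagy's talk (first gather seed inputs that exercise a lot of the target's code, then mutate them and watch for crashes) can be sketched in a few lines. This is my own toy illustration, not his tooling: the `parse` function is a hypothetical stand-in for a real application like MS-Word, and a "crash" is simulated with an exception.

```python
import random

def parse(data: bytes) -> int:
    """Toy stand-in for the real application under test (hypothetical)."""
    if data and data[0] == 0xFF:
        raise ValueError("simulated crash on bad header byte")
    return len(data)

def mutate(seed: bytes, rng: random.Random) -> bytes:
    """Flip one to three random bytes of a known-good, high-coverage input."""
    out = bytearray(seed)
    for _ in range(rng.randint(1, 3)):
        out[rng.randrange(len(out))] = rng.randrange(256)
    return bytes(out)

def fuzz(seeds, iterations=10000):
    """Mutate the seed inputs repeatedly and record any crashing cases."""
    rng = random.Random(0)  # fixed seed so runs are reproducible
    crashes = []
    for _ in range(iterations):
        case = mutate(rng.choice(seeds), rng)
        try:
            parse(case)
        except ValueError:
            crashes.append(case)
    return crashes

crashes = fuzz([b"\x00\x01\x02\x03"])
print(f"found {len(crashes)} crashing inputs")
```

The seed-selection step is what makes this effective in practice: random mutation only probes code paths that the seeds already reach, which is why covering a large portion of the code first matters.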

22 July 2010

Alexander Reichle-Schmehl: Recent RC-Bug activity

Remember the "one RC bug a day" activities from Steinar and Zack? Well, maybe I won't make it to one RC bug per day, but at least I can blog about some of my recent activities. Maybe it motivates some others? #553248 libnel-dev: missing-dependency-on-libc needed...
Trivial bug solved by adapting the depends-lines; uploaded.
#589344 missing symbols in library
Missing build depends; sponsored NMU by Davi Leal to delayed.
#582309 Links against libclamav
"solved" by disabling clamav support; uploaded to delayed 15.
#589819 Unmet Recommends on extremetuxracer-racer
Fixed typo in recommends (introduced by yours truly); uploaded.
#565805 FTBFS on kfreebsd-*: need to define type GLUTesselatorFunction
Applied patch by Felix Geyer; (doesn't help the release, package is experimental).
#568990 planetpenguin-racer-extras: Outdated; needs restructured and updated to be usefull
Uploaded fixed package prepared by Bertrand Marc.
#548084 aegir-provision: /etc/aegir/drushrc.php PHP error,
#548085 aegir-provision: /etc/aegir and /etc/aegir/drushrc.php file permissions
and an unreported piuparts problem: uploaded to delayed.
#569177 kernel-patch-nfs-ngroups: doesn't apply anymore
Added an updated version of the patch; uploaded.
#580120 mediatomb allows anyone to browse and export the whole filesystem
Disabled user interface in configuration file; uploaded to delayed.
#588554 jamin: Ships files in /usr/lib64/
Patched configure-script to install to the correct place; with the maintainer's permission uploaded without delay.
#576901 init.d script fails under Squeeze with insserv due to lack of run level definitions
Updated the LSB header of the init script the same way as upstream did in a recent version; uploaded to delayed (got thanks from the maintainer for that).
#574624 dibbler-client: after removing package with dselect impossible to purge it.
Verified fix by Edward Welbourne and uploaded to delayed.
#586057 havp 0.91-1.1~volatile1 still haven't show up in lenny-volatile yet.
Seems to have been an infrastructure hiccup, which got solved in the meantime. Closed without upload.
#518227 hplip: text missing at the bottom of a page on HP DJ 5550
Discussed with release team, severity got lowered.
As you can see, finding fixable RC bugs is certainly possible. So please step in :) BTW: If you don't have upload rights yourself, but have prepared an NMU / patch, feel free to ping me and I'll take a look and try to sponsor your NMU. Just ask!
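For bugs like #576901 above, insserv rejects init scripts whose LSB header lacks run level definitions, and the fix is usually just adding the Default-Start/Default-Stop lines. A minimal header of the expected shape (the service name here is hypothetical, not from the bug report):

```sh
### BEGIN INIT INFO
# Provides:          example-daemon
# Required-Start:    $remote_fs $syslog
# Required-Stop:     $remote_fs $syslog
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Start the example daemon
### END INIT INFO
```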

14 April 2010

Matt Zimmerman: Pop visualization

It's no surprise that information visualization and related fields have been active research topics in the past decade. The widespread availability of large, valuable data sets has created a need for improved tools and techniques for making sense of them. This is highly technical work, requiring expertise in design, mathematics, software and other fields, and there is a vibrant community of professionals and amateurs who experiment with and critique the latest methods. In the geeky blogs that I follow, I see more and more amateur interest in visualization. Visualization is starting to take on political significance as well, as evidenced by the appointment of information design pioneer Edward Tufte to a US government advisory panel to help explain economic policy to the American public. A technology is confirmed to be truly and vitally relevant, though, when it enters the mainstream consciousness in the form of humor. In the case of visualization, geeks have been trading funny charts and graphs for many years, but this phenomenon seems to have finally crossed the chasm into popular Internet culture with the advent of the Venn diagram meme. Now, anyone with an elementary education and a drawing program can create their own chuckle-inducing infographics and enjoy their 15 minutes of Twitter fame. Likewise for the "X as seen by Y" two-dimensional grids, revealing how various (stereotyped) groups perceive each other. This is the future, becoming more evenly distributed by the minute.

1 March 2010

Rob Taylor: RDF Beginners Guide and Competition

The video of our talk at FOSDEM 2010 didn't come out great, so I've made a slidecast of the RDF beginners guide that I gave as part of that talk. Enjoy below! At FOSDEM we also announced a competition for the coolest hack using RDF, SPARQL and Tracker, with the prize kindly sponsored by Codethink. The prize is a Google Nexus One. After some discussion we've decided to open up the competition to everyone and extend the deadline to the 15th of March. So if this tutorial inspires a great idea, get it coded up and submitted! On #tracker on GIMPNet there is a great bunch of hackers who can help you get your idea up and running. There are no hard rules for the competition; we just want to see implementations of cool ideas. The code doesn't have to be perfect, the only requirement is that the judges can check it out and run it on their systems. All entries should be submitted to tracker-competition@codethink.co.uk by the deadline. <object height="300" width="400"><param name="allowfullscreen" value="true"><param name="allowscriptaccess" value="always"><param name="movie" value="http://vimeo.com/moogaloop.swf?clip_id=9848513&amp;server=vimeo.com&amp;show_title=1&amp;show_byline=1&amp;show_portrait=0&amp;color=&amp;fullscreen=1"><embed allowfullscreen="true" allowscriptaccess="always" height="300" src="http://vimeo.com/moogaloop.swf?clip_id=9848513&amp;server=vimeo.com&amp;show_title=1&amp;show_byline=1&amp;show_portrait=0&amp;color=&amp;fullscreen=1" type="application/x-shockwave-flash" width="400"></embed></object> RDF Beginner's Guide from Rob Taylor on Vimeo. Update!
Looks like I hit a bug in PiTiVi that caused the render to get mangled. I've rerendered it and it should all be good now. If Vimeo is being slow for you, you can download the ogg here. Edward, I promise to log a repro as soon as I can!
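If the RDF model from the tutorial is new to you: RDF data is just subject-predicate-object triples, and a SPARQL query is essentially pattern matching with variables over them. Here is my own toy pure-Python matcher to illustrate the idea (Tracker's actual SPARQL engine is of course far richer; the example triples are made up):

```python
# RDF data is a set of (subject, predicate, object) triples.
triples = {
    ("nexus_one", "is_a", "phone"),
    ("nexus_one", "made_by", "google"),
    ("competition", "prize", "nexus_one"),
}

def match(pattern, triples):
    """Match one SPARQL-style triple pattern against a set of triples.

    Terms starting with '?' are variables; anything else must match exactly.
    Returns a list of variable bindings, one per matching triple.
    """
    results = []
    for triple in triples:
        binding = {}
        for pat, val in zip(pattern, triple):
            if pat.startswith("?"):
                binding[pat] = val      # bind the variable to this term
            elif pat != val:
                break                   # constant term mismatch: skip triple
        else:
            results.append(binding)
    return results

# Analogous to: SELECT ?p ?o WHERE { nexus_one ?p ?o }
bindings = match(("nexus_one", "?p", "?o"), triples)
```

A real SPARQL engine joins multiple such patterns and handles filters, optional matches and more, but the variable-binding core is the same shape.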
